The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
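The most common strategy reported above, patch-based training, can be sketched in a few lines. The function and parameter names below are our own generic illustration, not from the survey or any specific solution:

```python
# Minimal sketch of patch-based handling of large images, the strategy
# reported by 69% of affected survey respondents: a large 2D image is
# split into smaller windows that fit into memory. Names and sizes here
# are illustrative only.

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image (list of rows) and return
    (top, left, patch) tuples for each window position."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patch = [row[left:left + patch_size]
                     for row in image[top:top + patch_size]]
            patches.append((top, left, patch))
    return patches

# A 4x4 "image" split into 2x2 patches with stride 2 yields 4 patches.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = extract_patches(image, patch_size=2, stride=2)
print(len(patches))  # 4
```

In practice the patches would be fed to the network during training and the per-patch predictions stitched back together at the recorded offsets.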
We propose a Cascaded Buffered IoU (C-BIoU) tracker to track multiple objects that have irregular motions and indistinguishable appearances. When appearance features are unreliable and geometric features are confused by irregular motions, applying conventional Multiple Object Tracking (MOT) methods may generate unsatisfactory results. To address this issue, our C-BIoU tracker adds buffers to expand the matching space of detections and tracks, which mitigates the effect of irregular motions in two aspects: one is to directly match identical but non-overlapping detections and tracks in adjacent frames, and the other is to compensate for the motion estimation bias in the matching space. In addition, to reduce the risk of overexpansion of the matching space, cascaded matching is employed: first matching alive tracks and detections with a small buffer, and then matching unmatched tracks and detections with a large buffer. Despite its simplicity, our C-BIoU tracker works surprisingly well and achieves state-of-the-art results on MOT datasets that focus on irregular motions and indistinguishable appearances. Moreover, the C-BIoU tracker is the dominant component of our second-place solutions in the CVPR'22 SoccerNet MOT and ECCV'22 MOTComplex DanceTrack challenges. Finally, we analyze the limitations of our C-BIoU tracker in ablation studies and discuss its application scope.
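The buffered-IoU idea described above can be sketched as follows; the buffer ratio values and helper names are illustrative assumptions, not the ones tuned in the paper:

```python
# Sketch of buffered IoU (BIoU): each axis-aligned box is expanded by a
# buffer ratio r before computing IoU, so nearby but non-overlapping
# detections and tracks can still be matched. Ratios are illustrative.

def expand(box, r):
    """Expand an (x1, y1, x2, y2) box by buffer ratio r on each side."""
    x1, y1, x2, y2 = box
    dw, dh = r * (x2 - x1), r * (y2 - y1)
    return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)

def iou(box_a, box_b):
    """Standard intersection-over-union of two boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def biou(det, trk, r):
    """Buffered IoU: IoU of the two boxes after both are expanded."""
    return iou(expand(det, r), expand(trk, r))

# Two identical-size boxes that do not overlap at all:
det, trk = (0, 0, 10, 10), (12, 0, 22, 10)
print(iou(det, trk))          # 0.0 -- plain IoU cannot match them
print(biou(det, trk, r=0.3))  # > 0 -- buffered IoU can
```

In the cascaded scheme, matching would first be attempted with a small buffer for alive tracks, and leftover tracks and detections retried with a larger buffer; the specific ratios used here are hypothetical.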
This is our 2nd-place solution for the ECCV 2022 Multiple People Tracking in Group Dance Challenge. Our method mainly includes two steps: online short-term tracking using our Cascaded Buffer-IoU (C-BIoU) Tracker, and offline long-term tracking using appearance features and hierarchical clustering. Our C-BIoU tracker adds buffers to expand the matching space of detections and tracks, which mitigates the effect of irregular motions in two aspects: one is to directly match identical but non-overlapping detections and tracks in adjacent frames, and the other is to compensate for the motion estimation bias in the matching space. In addition, to reduce the risk of overexpansion of the matching space, cascaded matching is employed: first matching alive tracks and detections with a small buffer, and then matching unmatched tracks and detections with a large buffer. After using our C-BIoU for online tracking, we applied the offline refinement introduced by ReMOTS.
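The offline long-term step can be caricatured as clustering tracklets by appearance. The distance function, threshold, and greedy single-linkage scheme below are our own simplification for illustration; the actual refinement follows the ReMOTS procedure cited above:

```python
# Toy sketch of offline tracklet merging: tracklets whose appearance
# feature vectors are close (single linkage) are grouped into one
# long-term identity. Features and threshold are illustrative only.

def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def cluster_tracklets(features, threshold):
    """Greedy single-linkage agglomerative clustering over tracklet
    feature vectors; returns lists of tracklet indices per identity."""
    clusters = [[i] for i in range(len(features))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(dist(features[a], features[b]) < threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Tracklets 0 and 2 share appearance; tracklet 1 is a different person.
feats = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.0)]
print(cluster_tracklets(feats, threshold=1.0))  # [[0, 2], [1]]
```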
This is our second-place solution for the CVPR 2022 SoccerNet Tracking Challenge. Our method mainly includes two steps: online short-term tracking using our Cascaded Buffer-IoU (C-BIoU) Tracker, and offline long-term tracking using appearance features and hierarchical clustering. In our pipeline, online tracking yielded HOTA scores near 90, and offline tracking further improved the HOTA score to around 93.2.
Distinguishing computer-generated (CG) images from natural photographic (PG) images is crucial for verifying the authenticity and originality of digital images. However, recent cutting-edge generative methods enable very high synthesis quality in CG images, making this challenging task even more intractable. To address this problem, a joint learning strategy with deep texture and high-frequency features is proposed for CG image detection. We first formulate and deeply analyze the different acquisition processes of CG and PG images. Based on the finding that multiple different modules in image acquisition lead to inconsistent sensitivity to convolutional neural network (CNN)-based rendering in images, we propose a deep texture rendering module to enhance texture differences and discriminative texture representations. Specifically, a semantic segmentation map is generated to guide an affine transformation operation, which is used to recover the texture in different regions of the input image. Then, the combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with an attention mechanism, which refines intermediate features and facilitates trace exploration in the spatial and channel dimensions, respectively. Extensive experiments on two public datasets and a newly constructed dataset with more realistic and diverse images demonstrate that the proposed method outperforms existing methods by a clear margin. Furthermore, the results also demonstrate the detection robustness and generalization ability of the proposed method to postprocessing operations and images generated by generative adversarial networks (GANs).
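One ingredient above, feeding high-frequency components into the network, can be illustrated with a simple residual filter. The 3x3 box blur below is our own stand-in for whatever filter the method actually uses and is meant only to show what "high-frequency component" denotes:

```python
# Sketch of extracting a high-frequency component as the residual
# between an image and a smoothed copy (here a 3x3 box blur with edge
# clamping). The concrete filter choice is an assumption for
# illustration, not the one used in the paper.

def box_blur(img):
    """3x3 mean filter over a 2D image given as a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def high_freq(img):
    """High-frequency residual: image minus its blurred copy."""
    blur = box_blur(img)
    return [[img[y][x] - blur[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

# A flat image has no high-frequency content; edges and noise do.
flat = [[5.0] * 4 for _ in range(4)]
print(all(v == 0.0 for row in high_freq(flat) for v in row))  # True
```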
Picking up transparent objects remains a challenging task for robots. The visual properties of transparent objects, such as reflection and refraction, cause current grasping methods that rely on camera sensing to fail to detect and localize them. However, humans can handle transparent objects well by first observing their coarse profiles and then poking regions of interest to obtain a good grasping profile. Inspired by this, we propose a novel visual-guided tactile framework for grasping transparent objects. In the proposed framework, a segmentation network is first used to predict a horizontal upper region called the poking region, where the robot can poke the object to obtain good tactile readings while causing minimal disturbance to the object's state. Poking is then performed with a high-resolution gel-based tactile sensor. Given the local profile improved by the tactile reading, a heuristic grasp is planned for grasping the transparent object. To mitigate the limitations of real-world data collection and labeling for transparent objects, a large-scale realistic synthetic dataset is constructed. Extensive experiments show that our proposed segmentation network can predict the potential poking region with a mean average precision (mAP) of 0.360, and that visual-guided tactile poking can significantly improve the grasping success rate from 38.9% to 85.2%. Owing to its simplicity, our proposed method can also be adopted with other force or tactile sensors and can be used for grasping other challenging objects. All materials used in this paper are available at https://sites.google.com/view/tactilepoking.
The problems of noise and chromatic aberration in low-light images pose challenges for tasks such as object detection, semantic segmentation, and instance segmentation. In this paper, we propose an algorithm for low-illumination enhancement. KinD-LE uses a light curve estimation module in the network structure to enhance the illumination map of the Retinex-decomposed image, thereby improving image brightness. We propose an illumination map and reflectance map fusion module to restore the details of the restored image and reduce detail loss. Finally, we include a total variation loss function to eliminate noise. Our method uses the GladNet dataset as the training set and the LOL dataset as the test set, and is validated with ExDark as the dataset for downstream tasks. Extensive experiments on the benchmark demonstrate the advantages of our method, which comes close to state-of-the-art results with a PSNR of 19.7216 and an SSIM of 0.8213.
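The total variation loss mentioned above has a compact form; the sketch below shows the standard anisotropic variant, which may differ in weighting and normalization from the one actually used in KinD-LE:

```python
# Sketch of an anisotropic total-variation (TV) loss: it sums the
# absolute differences between horizontally and vertically adjacent
# pixels, penalizing noise while leaving flat regions untouched.
# This is the textbook variant, used here purely for illustration.

def tv_loss(img):
    """Anisotropic TV of a 2D image given as a list of rows."""
    h, w = len(img), len(img[0])
    loss = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:                       # horizontal neighbor
                loss += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:                       # vertical neighbor
                loss += abs(img[y + 1][x] - img[y][x])
    return loss

smooth = [[1.0, 1.0], [1.0, 1.0]]
noisy = [[0.0, 1.0], [1.0, 0.0]]
print(tv_loss(smooth))  # 0.0
print(tv_loss(noisy))   # 4.0
```

Minimizing this term during training pushes the enhanced image toward piecewise-smooth output, which is how it suppresses low-light noise.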
Transparent objects are widely used in our daily lives, so robots need to be able to handle them. However, transparent objects suffer from light reflection and refraction, which makes it challenging to obtain the accurate depth maps required for performing manipulation tasks. In this paper, we propose a novel affordance-based framework for the depth reconstruction and manipulation of transparent objects, named A4T. Hierarchical affordances are first used to detect transparent objects and their associated affordances, which encode the relative positions of different parts of the objects. Then, given the predicted affordance map, a multi-step depth reconstruction method is used to progressively reconstruct the depth maps of transparent objects. Finally, the reconstructed depth maps are used for affordance-based manipulation of transparent objects. To evaluate our proposed method, we construct TRANS-AFF, a real-world dataset with affordances and depth maps of transparent objects, the first of its kind. Extensive experiments show that our proposed method can predict accurate affordance maps and significantly improves the depth reconstruction of transparent objects compared with state-of-the-art methods, with the root mean squared error significantly reduced from 0.097 m to 0.042 m. Furthermore, we demonstrate the effectiveness of our proposed method through a series of robotic manipulation experiments on transparent objects. See https://sites.google.com/view/affordance4trans for supplementary videos and results.
Recently, achieving high-quality video conferencing while transmitting fewer bits has become a very hot and challenging problem. We propose FAIVConf, a specially designed video compression framework for video conferencing, based on effective neural human face generation techniques. FAIVConf brings together several designs to improve system robustness in real video conference scenarios: face swapping to avoid artifacts in background animation; face blurring to decrease the transmission bit-rate while maintaining the quality of the extracted facial landmarks; and dynamic source updating for face view interpolation to accommodate a large range of head poses. Our method achieves a significant bit-rate reduction in video conferencing and delivers much better visual quality at the same bit-rate compared with the H.264 and H.265 coding schemes.
Many real-world applications can be formulated as multi-agent cooperation problems, such as network packet routing and the coordination of autonomous vehicles. The emergence of deep reinforcement learning (DRL) provides a promising approach to multi-agent cooperation through the interaction of agents and environments. However, traditional DRL solutions suffer from high dimensionality during policy search when multiple agents have continuous action spaces. Furthermore, the dynamics of agents' policies make training non-stationary. To address these issues, we propose a hierarchical reinforcement learning approach that adopts high-level decision-making and low-level individual control for efficient policy search. In particular, the cooperation of multiple agents can be learned efficiently in the high-level discrete action space, while low-level individual control is reduced to single-agent reinforcement learning. In addition to hierarchical reinforcement learning, we also propose an opponent modeling network to model the policies of other agents during the learning process. In contrast to end-to-end DRL approaches, our approach reduces the learning complexity by decomposing the overall task into sub-tasks in a hierarchical way. To evaluate the efficiency of our approach, we conduct a real-world case study in a cooperative lane-change scenario. Both simulation and real-world experiments show the superiority of our approach in terms of collision rate and convergence speed.
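The two-level structure described above can be sketched with stub policies. Everything here (state fields, action names, control gains) is a hypothetical illustration: in the paper both levels are learned with DRL and informed by the opponent modeling network, whereas these stubs are hard-coded:

```python
# Structural sketch of hierarchical control for a lane-change scenario:
# a high-level policy picks a discrete cooperative action, and a
# low-level controller turns it into continuous steering/acceleration.
# All names and values are illustrative assumptions, not the paper's.

HIGH_LEVEL_ACTIONS = ["keep_lane", "change_left", "change_right"]

def high_level_policy(state):
    """Discrete decision; learned cooperatively in the paper, a simple
    gap-based rule here."""
    return "change_left" if state["gap_left"] > 10.0 else "keep_lane"

def low_level_controller(action, state):
    """Map the discrete decision to continuous control commands; this
    is the part that reduces to single-agent RL in the paper."""
    steer = {"keep_lane": 0.0, "change_left": -0.2, "change_right": 0.2}
    return {"steering": steer[action], "acceleration": 0.5}

state = {"gap_left": 15.0}
action = high_level_policy(state)
print(action)                               # change_left
print(low_level_controller(action, state))  # steering -0.2, accel 0.5
```

The point of the decomposition is visible even in this toy: the search space at the top level has only three discrete actions, however continuous the underlying control is.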